The following article represents a non-exhaustive summary of key elements emerging from the discussions held during the second session of the Military AI, Peace & Security (MAPS) Dialogues, a webinar series convened by the UN Office for Disarmament Affairs with the support of the Republic of Korea to foster inclusive multilateral dialogue on military applications of AI. Learn more about the event here or read the article on the first Dialogue on opportunities and risks here.
Artificial intelligence (AI) in the military domain continues to evolve rapidly, presenting potential opportunities alongside risks for international peace and security. The second installment of the Military AI, Peace & Security (MAPS) Dialogues, Capacity Building and International Cooperation on AI in the Military Domain, focused on mapping existing national, regional, subregional, and global efforts to ensure the responsible application of AI in the military domain. The discussions also sought to identify needs, priorities, and conditions for capacity building and international cooperation on responsible development and use, as well as to exchange good practices in national strategies, legislation, and policies. The event was open to representatives of States, international and regional organizations, civil society, the scientific community, and industry. This article captures insights from these discussions.
Mapping Existing Efforts
Panelists highlighted national and regional efforts already shaping the landscape of responsible military AI.
Kenya emerged as a key example, actively developing and implementing a comprehensive national AI strategy. Major Jamal Hassan from Kenya’s Ministry of Defence described the strategy, underscoring its focus on trustworthiness, ethical considerations, and responsible deployment of AI within military operations. He emphasized Kenya’s ambition to position itself as a regional leader, facilitating multi-layered cooperation across East Africa. These efforts include the creation of a unified data governance framework to address potential misuse of AI, especially given the region’s ongoing security challenges. Major Hassan noted that Kenya’s leadership role extends to continent-wide efforts, as demonstrated by its active participation in African Union dialogues focused on moving from strategy formulation to practical implementation.
From Latin America, Chile’s Valeria Chiappini Koscina described how the region has collectively taken steps to address military AI. Reflecting historical commitments like maintaining Latin America as a nuclear-weapon-free zone and a zone of peace, regional discussions now focus on extending these norms into new domains, such as AI. Chile recently updated its national AI policy to emphasize human-centered principles and ethical transparency, although Ms. Chiappini Koscina noted a clear gap: current legislation explicitly excludes military AI from regulation, creating a dual-use ambiguity that requires urgent attention.
Civil society representative Sumaya Nur Adan drew lessons from civilian AI initiatives, citing examples such as the UK-Kenya bilateral AI partnership and open-source communities like Masakhane. Ms. Nur Adan underlined the importance of addressing infrastructure divides (specifically computing power, evaluation tools, and local data ecosystems) as critical elements that directly influence capacity building and equitable participation in AI development, both civilian and military.

Identifying Needs and Priorities
Capacity building stood out as a crucial area demanding immediate attention and practical action to enable meaningful international cooperation.
Panelists identified infrastructure gaps, such as the lack of accessible computing power, robust evaluation mechanisms and high-quality data, as primary considerations. Ms. Nur Adan emphasized the need for “shared and aligned infrastructure” that would allow States, particularly those with fewer resources, to develop and deploy AI responsibly. She also stressed the importance of trust and verification mechanisms, such as audits and third-party evaluations, as fundamental to secure technology transfer and sustainable international collaboration.
Agreeing on these infrastructure points, Major Hassan also highlighted the human dimension, particularly talent retention and training. He stressed the importance of comprehensive workforce development, including specialized AI training tailored to military personnel, emphasizing that technical skills must be paired with an understanding of ethical and algorithmic accountability. This human-centric capacity building would ensure military personnel are adequately prepared to handle the complexities and limitations inherent in AI-driven decision-making under high-stress conditions.
Ms. Chiappini Koscina elaborated on the need for cross-sectoral communication within governments, describing Chile’s ongoing challenges in collaborating effectively on AI governance across ministerial silos. Effective capacity building, she argued, should involve breaking down these barriers to ensure cohesive, informed policymaking across civilian and military domains. Breaking down silos is both a means and an end of capacity building: it serves as a decision-making tool to collectively build responsible practices and as a trust-building measure across compartmentalized groups of stakeholders.
Exchanging Good Practices
During discussions on good practices, panelists shared specific examples and lessons learned that could guide international cooperation.
Ms. Nur Adan pointed to successful civilian-sector practices such as model cards and other transparency measures, which document detailed information about an AI model’s intended uses, ethical considerations, and potential risks. Such practices could significantly enhance transparency and accountability in military contexts.
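To make the model-card practice concrete, the sketch below shows what such structured documentation might look like in machine-readable form. It is a minimal, hypothetical illustration: the field names, model details, and contact address are assumptions loosely modeled on civilian-sector model cards, not a standard or an actual military artifact.

```python
# Hypothetical minimal "model card": structured documentation that travels
# with an AI model, recording intended uses, limitations, and ethical notes.
# All field names and values here are illustrative assumptions.

model_card = {
    "model_name": "example-object-detector",  # hypothetical model
    "version": "1.0",
    "intended_use": "Illustration only: assisted review of imagery by trained analysts",
    "out_of_scope_uses": ["autonomous targeting", "identification of individuals"],
    "training_data": "Description of data provenance, licensing, and known gaps",
    "evaluation": {
        "metrics": ["precision", "recall"],
        "known_limitations": "Degraded performance on low-light imagery",
    },
    "ethical_considerations": "Human review required before any operational decision",
    "contact": "responsible-ai-team@example.org",  # placeholder address
}

def check_required_fields(card: dict) -> list[str]:
    """Return an alphabetized list of required documentation fields missing from the card."""
    required = {"model_name", "intended_use", "evaluation", "ethical_considerations"}
    return sorted(required - card.keys())

print(check_required_fields(model_card))  # → [] (all required fields present)
```

A simple completeness check like `check_required_fields` hints at how third-party audits could verify that documentation accompanies a model, one of the trust mechanisms the panel discussed.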
Major Hassan emphasized Kenya’s extensive and inclusive public consultation process in developing its national AI strategy, which resulted in more than 500 comments and suggestions from diverse stakeholders. This approach not only built trust among stakeholders but also ensured that the national strategy genuinely reflected societal values and concerns, thus offering a replicable model for other States.
Ms. Chiappini Koscina outlined Latin America’s regional dialogue, underscoring the shared commitment among States to maintain regional peace through proactive regulation of emerging military technologies. She argued that regional unity could serve as a powerful stepping stone toward broader international consensus, helping overcome fragmentation in global discussions.
Conclusion: Opportunities and Challenges Ahead
Panelists concluded by identifying infrastructure and trust-building as both primary challenges and opportunities for the future governance of military AI. A clear consensus emerged around the importance of robust, shared infrastructures and trustworthy verification standards as critical elements in fostering international collaboration.
The panelists collectively highlighted the indispensable human factor, reminding policymakers that technology must support and enhance—not replace—human judgment and ethical accountability in military operations.
The MAPS Dialogues underscored the complexity of governing AI in military contexts, illustrating that achieving responsible AI use requires comprehensive technical, ethical, and socio-political commitments from all stakeholders. Ongoing international dialogue and cooperation remain essential as the global community navigates this challenging but critical domain.